
2024 iThome 鐵人賽

DAY 28

Think Again Kubernetes series, part 28

Exploring the Layered cgroup with kind


This article uses kind to walk through the layered cgroup hierarchy.

Steps

  • Create the kind cluster

    • Specify a worker node in the config file
  • Enter the worker node

    • Use docker exec to enter the worker node
  • Inspect the worker node's root cgroup, system-reserved, and kube-reserved cgroups

    • Use systemd-cgls to inspect the layered cgroup
  • Inspect the empty layered cgroup

  • Apply a Burstable QoS pod

    • Check the Burstable QoS cgroup
  • Apply a Guaranteed pod

    • Check the Guaranteed pod
  • Apply a BestEffort QoS pod

    • Check the BestEffort QoS cgroup

root cgroup → /sys/fs/cgroup/

  • system reserved → /sys/fs/cgroup/system.slice

    • runtime cgroup → /sys/fs/cgroup/system.slice/containerd.service
  • kube reserved → /sys/fs/cgroup/kubelet.slice/kubelet.service

  • kubepods → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice

    • Guaranteed pod → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/{pod_id}

      • container → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/{pod_id}/{container_id}
    • Burstable QoS → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice

      • Burstable pod → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/{pod_id}

        • container → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/{pod_id}/{container_id}
    • BestEffort QoS → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice

      • BestEffort pod → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice/{pod_id}

        • container → /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-besteffort.slice/{pod_id}/{container_id}
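The way these paths compose can be sketched as a small shell helper. It assumes the systemd cgroup driver's slice naming convention (a pod's slice is named `kubelet-kubepods[-<qos>]-pod<uid>.slice`, with dashes in the pod UID replaced by underscores); the helper itself is illustrative, not a real kubelet or kind API:

```shell
#!/bin/sh
# Sketch: build the expected systemd slice path for a pod from its QoS
# class and UID, mirroring the tree above. Assumes the systemd cgroup
# driver naming convention; for illustration only.
expected_pod_slice() {
  qos="$1"; uid="$2"
  esc=$(printf '%s' "$uid" | tr '-' '_')   # dashes in UID become underscores
  base=/sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice
  case "$qos" in
    guaranteed) printf '%s\n' "$base/kubelet-kubepods-pod${esc}.slice" ;;
    burstable)  printf '%s\n' "$base/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod${esc}.slice" ;;
    besteffort) printf '%s\n' "$base/kubelet-kubepods-besteffort.slice/kubelet-kubepods-besteffort-pod${esc}.slice" ;;
  esac
}

expected_pod_slice burstable "0a1b2c3d-4e5f"
```

Note that a Guaranteed pod's slice sits directly under kubelet-kubepods.slice, while Burstable and BestEffort pods get an extra QoS layer; we will verify this with real pods below.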

Create the kind cluster

Specify a worker node in the config file

To keep the control plane components from interfering with our tests, we create a dedicated worker node and schedule all test pods onto it.

Create a config file named cluster.yaml:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# One control plane node and one worker.
#
# While these will not add more real compute capacity and
# have limited isolation, this can be useful for testing
# rolling updates etc.
#
# The API-server and other control plane components will be
# on the control-plane node.
#
# You probably don't need this unless you are testing Kubernetes itself.
nodes:
- role: control-plane
- role: worker

Create the cluster

The pod manifests prepared for this article hard-code the worker node's hostname, so the cluster name must be cgroup-test for the names to match.

kind create cluster --config ./cluster.yaml --name cgroup-test

Enter the worker node

Find the container that acts as the worker node:

docker ps | grep cgroup-test-worker

https://ithelp.ithome.com.tw/upload/images/20241007/201691356Jd6wQH5vq.png

Use docker exec to enter the worker node.

Although we just looked up the container ID, here we use the container name rather than the ID, mainly to get you familiar with how kind names things.

docker exec -it cgroup-test-worker bash

https://ithelp.ithome.com.tw/upload/images/20241007/20169135zY0quu7JRm.png

Inspect the cgroups

First, use crictl ps to see which containers are currently running on this node:

crictl ps

https://ithelp.ithome.com.tw/upload/images/20241007/20169135bkMT8rCN7p.png

We can see that this worker node runs only the essential containers.

Next, use systemd-cgls to inspect the cgroup tree:

systemd-cgls

Overview

https://ithelp.ithome.com.tw/upload/images/20241007/201691354J57d8IfIT.png

root cgroup

https://ithelp.ithome.com.tw/upload/images/20241007/20169135KVuUpa7PgC.png

systemreserved cgroup

We can see that containerd is placed here.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135rhznwXh4cD.png

kubelet.service cgroup

The kubelet's own cgroup is placed here.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135sKvsRm40WC.png

kubepods cgroup

https://ithelp.ithome.com.tw/upload/images/20241007/20169135X1by1XSjVe.png

QoS(besteffort, burstable) cgroup

https://ithelp.ithome.com.tw/upload/images/20241007/20169135PaOBLa5IDK.png

GuaranteedPod cgroup

https://ithelp.ithome.com.tw/upload/images/20241007/2016913512TOfeyCLx.png

We have now observed five layers of cgroups. Next, let's deploy some pods and see how the tree changes.

Deploy pods and observe the cgroup changes

Deploy a Burstable QoS pod

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: burstable-qos-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: cgroup-test-worker
  containers:
  - name: burstable-qos-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 2; done"]
    resources:
      requests:
        cpu: 0.1
        memory: "100Mi"
EOF

Use crictl pods to look up the pod ID, then use crictl inspectp to find its cgroup path.

crictl pods

https://ithelp.ithome.com.tw/upload/images/20241007/2016913577Z7TGmDLz.png

crictl inspectp 7503633d561a0

https://ithelp.ithome.com.tw/upload/images/20241007/20169135gL00WRQVsB.png

Run systemd-cgls again; the burstable-qos-pod cgroup we just created shows up under kubelet-kubepods-burstable.slice.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135GzqKWpnuiO.png

Now cd into that path and look at the cgroup parameters:

cd /sys/fs/cgroup/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice

https://ithelp.ithome.com.tw/upload/images/20241007/20169135S4bDSLfiUZ.png

We can see that cpu.weight and cpu.max are populated:

https://ithelp.ithome.com.tw/upload/images/20241007/20169135sepd2RFsOO.png

https://ithelp.ithome.com.tw/upload/images/20241007/20169135DtnDsz8Q03.png
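Where does the cpu.weight value come from? As I understand it, the kubelet first converts the CPU request to cgroup v1 style shares (millicores × 1024 / 1000), and on cgroup v2 the runtime (runc) then maps shares to a weight with weight = 1 + (shares − 2) × 9999 / 262142. A quick sketch of the arithmetic (the function name is mine):

```shell
#!/bin/sh
# Sketch: reproduce the CPU-request -> cpu.weight arithmetic.
#   shares (cgroup v1):  millicores * 1024 / 1000
#   weight (cgroup v2):  1 + (shares - 2) * 9999 / 262142
millicores_to_weight() {
  shares=$(( $1 * 1024 / 1000 ))
  echo $(( 1 + (shares - 2) * 9999 / 262142 ))
}

millicores_to_weight 100   # 100m, as requested by the pod above -> prints 4
millicores_to_weight 1000  # a full CPU -> prints 39
```

This is why a tiny 100m request still shows up as a small but non-default cpu.weight on the slice.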

Deploy a second Burstable QoS pod

Next, deploy a second burstable pod:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: burstable-qos-pod-2
spec:
  nodeSelector:
    kubernetes.io/hostname: cgroup-test-worker
  containers:
  - name: burstable-qos-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 2; done"]
    resources:
      requests:
        cpu: 0.1
        memory: "100Mi"
EOF

Using the same steps, we can see that cpu.weight and cpu.max are populated here as well.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135skDHLFjMDf.png

Deploy the guaranteed-qos-pod

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-qos-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: cgroup-test-worker
  containers:
  - name: guaranteed-qos-pod
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 2; done"]
    resources:
      limits:
        memory: "512Mi"
        cpu: "1"
      requests:
        memory: "512Mi"
        cpu: "1"
EOF

Find the pod ID with crictl pods, then use crictl inspectp to get the cgroup path, as before.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135S3zFBFyTnY.png

We can see that the Guaranteed pod's cgroup sits directly under kubepods, with no QoS cgroup layer in between.

https://ithelp.ithome.com.tw/upload/images/20241007/20169135uUTm5wqO64.png

Deploy the besteffort-qos-pod

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-qos-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: cgroup-test-worker
  containers:
  - name: besteffort-qos-container
    image: busybox
    command: ["/bin/sh", "-c", "while true; do sleep 2; done"]
EOF

The BestEffort pod we deployed is placed under the BestEffort QoS slice.

https://ithelp.ithome.com.tw/upload/images/20241007/201691358ZWNPMCFgL.png
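Putting the three deployments together: the QoS class decides where a pod lands in the tree. The classification rule can be sketched as a hypothetical helper for a single-container pod (the real kubelet logic also aggregates across containers and defaults requests from limits):

```shell
#!/bin/sh
# Sketch of the QoS classification rule for a single-container pod:
#   no requests and no limits                 -> BestEffort
#   cpu & memory requests equal their limits  -> Guaranteed
#   anything else                             -> Burstable
# Illustrative only; real kubelet logic handles multi-container pods too.
qos_class() {
  req_cpu="$1"; req_mem="$2"; lim_cpu="$3"; lim_mem="$4"
  if [ -z "$req_cpu$req_mem$lim_cpu$lim_mem" ]; then
    echo BestEffort
  elif [ -n "$lim_cpu" ] && [ -n "$lim_mem" ] \
       && [ "$req_cpu" = "$lim_cpu" ] && [ "$req_mem" = "$lim_mem" ]; then
    echo Guaranteed
  else
    echo Burstable
  fi
}

qos_class 100m 100Mi "" ""   # like burstable-qos-pod  -> Burstable
qos_class 1 512Mi 1 512Mi    # like guaranteed-qos-pod -> Guaranteed
qos_class "" "" "" ""        # like besteffort-qos-pod -> BestEffort
```

These three classes map exactly onto the three branches we just observed under kubelet-kubepods.slice.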


That wraps up the layered cgroup hierarchy. Tomorrow we'll move on to the CPU Manager.

